Contextual word embedding models such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) have dramatically improved performance for many natural language processing (NLP) tasks in recent months. However, these models have been minimally explored on specialty corpora, such as clinical text; moreover, in the clinical domain, no publicly available pre-trained BERT models yet exist. In this work, we address this need by exploring and releasing BERT models for clinical text: one for generic clinical text and another for discharge summaries specifically. We demonstrate that using a domain-specific model yields performance improvements on three common clinical NLP tasks as compared to nonspecific embeddings. These domain-specific models are not as performant on two clinical de-identification tasks, and we argue that this is a natural consequence of the differences between de-identified source text and synthetically non de-identified task text.
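As a concrete illustration of how such a released checkpoint would be consumed downstream, the following minimal sketch embeds a clinical note with a domain-specific BERT via the Hugging Face transformers API. The model identifier is an assumption, to be replaced with whichever clinical checkpoint is actually used.

```python
# Minimal sketch: encoding a clinical note with a domain-specific BERT.
# MODEL_ID is an assumed checkpoint name, not necessarily the released one.
from transformers import AutoTokenizer, AutoModel
import torch

MODEL_ID = "emilyalsentzer/Bio_ClinicalBERT"  # assumed checkpoint identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

note = "Patient admitted with acute dyspnea; discharged on day 3."
inputs = tokenizer(note, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token embeddings into a single note-level vector.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # torch.Size([1, 768])
```

The same encoder would then be fine-tuned per downstream task (e.g., clinical NER or inference) in the usual way.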
Scoring the factuality of a generated summary involves measuring the degree to which a target text contains factual information using the input document as support. Given the similarities in the problem formulation, previous work has shown that Natural Language Inference models can be effectively repurposed to perform this task. As these models are trained to score entailment at a sentence level, several recent studies have shown that decomposing either the input document or the summary into sentences helps with factuality scoring. But is fine-grained decomposition always a winning strategy? In this paper we systematically compare different granularities of decomposition -- from the document to the sub-sentence level -- and we show that the answer is no. Our results show that incorporating additional context can yield improvement, but that this does not necessarily apply to all datasets. We also show that small changes to previously proposed entailment-based scoring methods can result in better performance, highlighting the need for caution in model and methodology selection for downstream tasks.
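To make the decomposition choice concrete, here is a minimal sketch of summary-sentence-level entailment scoring with an off-the-shelf MNLI model; the model name and the max-over-document / mean-over-summary aggregation are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: sentence-level NLI factuality scoring, assuming the
# public "roberta-large-mnli" checkpoint. Each summary sentence is scored
# against its best-supporting document sentence; scores are then averaged.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
ENTAILMENT = model.config.label2id.get("ENTAILMENT", 2)

def entail_prob(premise: str, hypothesis: str) -> float:
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, ENTAILMENT].item()

def factuality(doc_sents, summary_sents):
    # Max over document sentences, mean over summary sentences (one of
    # several aggregation choices a granularity study would compare).
    scores = [max(entail_prob(d, s) for d in doc_sents) for s in summary_sents]
    return sum(scores) / len(scores)

doc = ["The company reported record profits in 2021.",
       "Its CEO resigned in March."]
summary = ["Profits hit a record in 2021."]
print(factuality(doc, summary))
```

Coarser granularities fall out of the same scaffold by passing the whole document, or whole paragraphs, as the premise instead of individual sentences.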
Recent work in large language modeling (LLMs) has used fine-tuning to align outputs with the preferences of a prototypical user. This work assumes that human preferences are static and homogeneous across individuals, so that aligning to a single "generic" user will confer more general alignment. Here, we embrace the heterogeneity of human preferences to consider a different challenge: how might a machine help people with diverse views find agreement? We fine-tune a 70 billion parameter LLM to generate statements that maximize the expected approval for a group of people with potentially diverse opinions. Human participants provide written opinions on thousands of questions touching on moral and political issues (e.g., "should we raise taxes on the rich?"), and rate the LLM's generated candidate consensus statements for agreement and quality. A reward model is then trained to predict individual preferences, enabling it to quantify and rank consensus statements in terms of their appeal to the overall group, defined according to different aggregation (social welfare) functions. The model produces consensus statements that are preferred by human users over those from prompted LLMs (>70%) and significantly outperforms a tight fine-tuned baseline that lacks the final ranking step. Further, our best model's consensus statements are preferred over the best human-generated opinions (>65%). We find that when we silently constructed consensus statements from only a subset of group members, those who were excluded were more likely to dissent, revealing the sensitivity of the consensus to individual contributions. These results highlight the potential to use LLMs to help groups of humans align their values with one another.
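The final ranking step can be illustrated with a small sketch: given a (hypothetical) reward model's per-member approval predictions for each candidate statement, different social welfare functions yield different group-level rankings.

```python
# Minimal sketch of ranking candidate consensus statements under two
# social welfare functions. The per-member scores are invented; in the
# paper they come from a trained preference reward model.
import numpy as np

def rank_candidates(member_scores: np.ndarray, welfare: str = "mean"):
    """member_scores: (n_candidates, n_members) predicted approval."""
    if welfare == "mean":      # utilitarian: maximize average approval
        group = member_scores.mean(axis=1)
    elif welfare == "min":     # Rawlsian: maximize the least-satisfied member
        group = member_scores.min(axis=1)
    else:
        raise ValueError(welfare)
    return np.argsort(-group)  # best candidate first

scores = np.array([[0.9, 0.2, 0.8],   # candidate 0, three members
                   [0.6, 0.6, 0.6]])  # candidate 1
print(rank_candidates(scores, "mean"))  # [0 1]
print(rank_candidates(scores, "min"))   # [1 0]
```

The example shows why the aggregation function matters: the utilitarian rule prefers the statement with higher average appeal, while the Rawlsian rule prefers the one that leaves no member strongly dissenting.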
Reformulating the history matching problem from a least-squares mathematical optimization problem into a Markov Decision Process introduces a method in which reinforcement learning can be utilized to solve it. This method provides a mechanism by which an artificial deep-neural-network agent can interact with the reservoir simulator and find multiple different solutions to the problem. Such a formulation allows the problem to be solved in parallel by launching multiple concurrent environments, enabling the agent to learn from all the environments simultaneously and achieving a significant speed-up.
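A minimal sketch of the MDP framing, under assumed interfaces: a Gym-style environment wraps the reservoir simulator, an action perturbs the model parameters, and the reward is the reduction in data misfit. `run_simulator` is a hypothetical stand-in for the actual simulator call.

```python
# Minimal sketch: history matching recast as an MDP. The reward signal is
# the decrease in least-squares misfit after each parameter update.
import numpy as np

def run_simulator(params: np.ndarray) -> np.ndarray:
    """Hypothetical forward model: returns simulated production data."""
    return np.tanh(params)  # placeholder dynamics

class HistoryMatchEnv:
    def __init__(self, observed: np.ndarray):
        self.observed = observed

    def reset(self):
        self.params = np.random.uniform(-1, 1, size=self.observed.shape)
        self.misfit = self._misfit()
        return self.params.copy()

    def _misfit(self) -> float:
        return float(np.sum((run_simulator(self.params) - self.observed) ** 2))

    def step(self, action: np.ndarray):
        self.params += action              # perturb model parameters
        new_misfit = self._misfit()
        reward = self.misfit - new_misfit  # reward = misfit reduction
        self.misfit = new_misfit
        done = new_misfit < 1e-3           # matched within tolerance
        return self.params.copy(), reward, done, {}

# Many such environments can run concurrently so the agent learns from
# all of them at once, which is where the parallel speed-up comes from.
env = HistoryMatchEnv(observed=np.array([0.3, -0.1, 0.5]))
state = env.reset()
state, reward, done, _ = env.step(np.zeros_like(state))
```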
Secure digital wireless communication underwater has become a critical issue as maritime operations shift toward heterogeneous mixes of robotic assets, and as the security of digital systems is challenged across all domains. At the same time, the proliferation of underwater signal encodings and physical-layer options offers greater bandwidth and flexibility, but largely lacks the standards required for interoperability. Here we address a fundamental requirement for security: confirmation of an asset's identity, also known as authentication. We propose, implement, verify, and validate an authentication protocol based on the first digital underwater communication standard. Our scheme applies primarily to AUVs operating around offshore oil and gas installations, but also to other underwater equipment that may carry acoustic modems in the future. It makes communication, including command and control, more secure, and provides a foundation for developing more sophisticated security mechanisms.
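For intuition, the sketch below shows a generic challenge-response authentication step with a pre-shared key and HMAC. This is an illustrative stand-in, not the protocol specified in the paper, and it omits framing, key management, and the underlying acoustic standard.

```python
# Minimal sketch of challenge-response identity confirmation. The key
# name and provisioning model are assumptions for the example only.
import hmac, hashlib, os

PRE_SHARED_KEY = b"example-key-provisioned-before-deployment"

def make_challenge() -> bytes:
    return os.urandom(16)  # fresh nonce chosen by the verifier

def respond(challenge: bytes, key: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()                   # sent to the AUV over the modem
response = respond(challenge, PRE_SHARED_KEY)  # computed and returned by the AUV
assert verify(challenge, response, PRE_SHARED_KEY)
```

The fresh nonce is what prevents a recorded response from being replayed later, which matters on an open acoustic channel anyone can overhear.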
Several chronic lung diseases, such as idiopathic pulmonary fibrosis (IPF), are characterized by abnormal dilation of the airways. Quantification of airway features on computed tomography (CT) can help characterize disease progression. Physics-based airway measurement algorithms have been developed, but have had limited success due to the diversity of airway morphology seen in clinical practice. Supervised learning methods are also not feasible due to the high cost of obtaining precise airway annotations. We propose synthesizing airways by style transfer using perceptual losses to train our model, the Airway Transfer Network (ATN). We compare the ATN model with a state-of-the-art GAN network (simGAN) by a) qualitative assessment and b) evaluating the ability of ATN- and simGAN-based CT airway metrics to predict mortality in a cohort of 113 IPF patients. ATN was found to be faster and easier to train than simGAN. ATN-based airway measurements were also consistently more robust than simGAN-derived airway metrics on IPF CTs. Refining synthetic data with a transformation network trained on perceptual losses is a realistic alternative to GAN-based methods for clinical CT analysis in idiopathic pulmonary fibrosis. Our source code is available at https://github.com/ashkanpakzad/atn and is compatible with AirQuant, an existing open-source airway analysis framework.
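The perceptual-loss idea can be sketched as follows: distances are computed between feature activations of a frozen pretrained network rather than between raw pixels. The choice of VGG16 and the layer cutoff here are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a perceptual loss: compare frozen VGG16 feature maps
# of the synthetic image and the target instead of raw pixel values.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

features = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()
for p in features.parameters():
    p.requires_grad_(False)  # the loss network stays frozen

def perceptual_loss(synthetic: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Inputs: (N, 3, H, W) image batches normalized for VGG."""
    return nn.functional.mse_loss(features(synthetic), features(target))

x = torch.rand(1, 3, 224, 224)
y = torch.rand(1, 3, 224, 224)
print(perceptual_loss(x, y).item())
```

Because only the small transformation network is trained against this fixed loss, there is no adversarial game to balance, which is consistent with the reported advantage in training speed and stability over simGAN.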
Online autonomous agents are able to draw on a wide variety of potential sources of task knowledge; however, current approaches invariably focus on only one or two. Here we investigate the challenges and impact of exploiting diverse knowledge sources for one-shot learning of new tasks by a simulated household mobile robot. The resulting agent, developed in the Soar cognitive architecture, uses the following sources of domain and task knowledge: interaction with the environment, task execution and planning knowledge, human natural language instruction, and responses retrieved from a large language model (GPT-3). We explore the distinct contributions of these knowledge sources and evaluate the performance of different combinations in terms of learning correct task knowledge, human workload, and computational cost. Results from combining all sources show that integration improves one-shot task learning in terms of computational cost and human workload.
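As a purely hypothetical illustration of one integration strategy, the sketch below consults knowledge sources in order of increasing cost and accepts retrieved task knowledge only once it verifies against the environment; the source names, ordering, and costs are invented for the example and are not Soar's actual mechanism.

```python
# Hypothetical sketch: consult cheaper knowledge sources first, accept the
# first piece of task knowledge that verifies against the environment.
def learn_task(goal, sources, verify):
    """sources: list of (name, cost, query_fn) tuples."""
    for name, cost, query in sorted(sources, key=lambda s: s[1]):
        knowledge = query(goal)
        if knowledge is not None and verify(goal, knowledge):
            return knowledge, name, cost
    return None, None, None

sources = [
    ("planning",    1.0, lambda g: None),            # internal search fails here
    ("gpt3",        2.0, lambda g: f"plan-for-{g}"), # LLM retrieval succeeds
    ("instruction", 5.0, lambda g: f"human-{g}"),    # human help is costliest
]
knowledge, used, cost = learn_task("tidy-kitchen", sources,
                                   verify=lambda g, k: True)
print(used, cost)  # gpt3 2.0
```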
Popularity bias is the tendency of recommender systems to over-favor popular artists when recommending artists to users. They may thus contribute to winner-take-all markets in which a small number of artists receive nearly all of the attention, while similarly qualified artists are unlikely to be discovered. In this paper, we attempt to measure popularity bias in three state-of-the-art recommender system models (SLIM, Multi-VAE, WRMF) and in three commercial music streaming services (Spotify, Amazon Music, YouTube). We find that the most accurate model (SLIM) also exhibits the most popularity bias, while less accurate models exhibit less popularity bias. Based on a simulated user experiment, we find no evidence of popularity bias in the commercial recommendations.
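One simple way to quantify popularity bias, sketched below under assumed inputs, is to compare the average popularity of recommended artists against the catalog average; the exact metric used in the paper may differ.

```python
# Minimal sketch of a popularity-bias measurement. Play counts stand in
# for artist popularity; the data are invented for illustration.
import numpy as np

def avg_rec_popularity(recs: list, play_counts: np.ndarray) -> float:
    """recs: per-user lists of recommended artist ids."""
    pops = [play_counts[a] for user_recs in recs for a in user_recs]
    return float(np.mean(pops))

play_counts = np.array([1000, 500, 20, 5])  # artist popularity proxy
recs = [[0, 1], [0, 2]]                     # top-2 lists for two users
bias = avg_rec_popularity(recs, play_counts) / play_counts.mean()
print(bias)  # > 1 means recommendations skew toward popular artists
```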
Osteosarcoma is the most common primary bone cancer, and its standard treatment consists of preoperative chemotherapy followed by resection. Chemotherapy response is used to predict patient prognosis and guide further treatment. Necrosis is routinely assessed on histology slides from the resected specimen, with the necrosis ratio defined as the ratio of necrotic tumor to overall tumor. Patients with a necrosis ratio >= 90% are known to have a better prognosis. Manual microscopic review of the necrosis ratio across multiple glass slides is semi-quantitative and subject to intra- and inter-observer variability. We propose an objective and reproducible deep learning approach to estimate the necrosis ratio, and to predict outcomes, from scanned hematoxylin-and-eosin whole-slide images (WSIs). We collected 103 osteosarcoma cases comprising 3134 WSIs to train our deep learning model, validate necrosis ratio assessment, and evaluate outcome prediction. We trained a deep multi-magnification network to segment multiple tissue subtypes, including viable tumor and necrotic tumor, at the pixel level, and computed the case-level necrosis ratio from the multiple WSIs of each case. We show that necrosis ratios estimated by our segmentation model correlate highly with those in pathology reports manually assessed by experts, with mean absolute differences of 4.4%, 4.5%, and 17.8% for grade IV (100%), grade III (>= 90%), and grade II (>= 50% and < 90%) necrosis responses, respectively. We successfully stratified patients to predict overall survival with p = 10^-6 and progression-free survival with p = 0.012. Our reproducible method, free of observer variability, also allowed us to tune cutoff thresholds specific to our model and dataset: 80% for OS and 60% for PFS. Our study indicates that deep learning can support pathologists as an objective tool for analyzing osteosarcoma histology to assess treatment response and predict patient outcomes.
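The case-level necrosis ratio as defined above reduces to a pooled pixel-count computation over all WSIs of a case, sketched here with invented counts:

```python
# Minimal sketch: case-level necrosis ratio = necrotic tumor pixels over
# total tumor pixels, pooled across the case's WSIs. Counts are invented.
def necrosis_ratio(per_slide_counts: list) -> float:
    """per_slide_counts: one dict per WSI with pixel counts for
    'necrotic_tumor' and 'viable_tumor' from the segmentation model."""
    necrotic = sum(c["necrotic_tumor"] for c in per_slide_counts)
    viable = sum(c["viable_tumor"] for c in per_slide_counts)
    total = necrotic + viable
    return necrotic / total if total else 0.0

case = [
    {"necrotic_tumor": 8_000_000, "viable_tumor": 1_500_000},  # WSI 1
    {"necrotic_tumor": 5_500_000, "viable_tumor":   500_000},  # WSI 2
]
ratio = necrosis_ratio(case)
print(f"{ratio:.1%}")  # 87.1%
print("good responder" if ratio >= 0.9 else "poor responder")
```

Because the ratio is computed deterministically from pixel counts, the stratification cutoff (90% by convention, or the tuned 80%/60% thresholds) becomes a single adjustable parameter rather than a subjective judgment.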
FIB/SEM tomography represents an indispensable tool for the characterization of three-dimensional nanostructures in battery research and many other fields. In many cases, however, contrast and 3D classification/reconstruction problems arise that severely limit the applicability of the technique, especially for porous materials such as those used as electrode materials in batteries or fuel cells. Distinguishing the different components, such as active Li storage particles and carbon/binder materials, is difficult and often prevents a reliable quantitative analysis of the image data, or may even lead to wrong conclusions about structure-property relationships. In this contribution, we present a novel approach for the classification of three-dimensional image data obtained by FIB/SEM tomography and its application to an NMC battery electrode material. We use two different image signals, namely the signal of the angled SE2 chamber detector and that of the InLens detector, combine both signals, and train a random forest, a particular machine learning algorithm. We demonstrate that this approach can overcome the current limitations of existing techniques for multiphase measurements, and that it enables quantitative data reconstruction even where the current state of the art fails or would require a large training set. This approach may serve as a guideline for future studies using FIB/SEM tomography.
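The classification idea can be sketched as follows: each voxel is described by its intensity in the two registered detector signals, and a random forest maps that feature pair to a material class. The data here are random placeholders; real use would rely on hand-labeled training voxels and likely richer neighborhood features.

```python
# Minimal sketch: per-voxel classification from two registered FIB/SEM
# image signals (SE2 and InLens) with a random forest. Data are random
# placeholders standing in for real stacks and sparse annotations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

se2 = np.random.rand(10, 64, 64)     # angled SE2 chamber detector stack
inlens = np.random.rand(10, 64, 64)  # InLens detector stack
labels = np.random.randint(0, 3, size=(10, 64, 64))  # e.g., pore / NMC / binder

X = np.stack([se2.ravel(), inlens.ravel()], axis=1)  # (n_voxels, 2) features
y = labels.ravel()

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
clf.fit(X, y)

segmented = clf.predict(X).reshape(se2.shape)  # 3D class map
```

Combining the two signals is the key move: phases that are indistinguishable in one detector's contrast can separate cleanly in the joint feature space.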